How Decoding Strategies Affect the Verifiability of Generated Text / Massarelli, Luca; Petroni, Fabio; Piktus, Aleksandra; Ott, Myle; Rocktäschel, Tim; Plachouras, Vassilis; Silvestri, Fabrizio; Riedel, Sebastian. - (2020), pp. 223-235. (Paper presented at Findings of the Association for Computational Linguistics: EMNLP 2020, held online) [10.18653/v1/2020.findings-emnlp.22].

How Decoding Strategies Affect the Verifiability of Generated Text

Luca Massarelli; Fabrizio Silvestri
2020

Abstract

Recent progress in pre-trained language models has led to systems that are able to generate text of increasingly high quality. While several works have investigated the fluency and grammatical correctness of such models, it is still unclear to what extent the generated text is consistent with factual world knowledge. Here, we go beyond fluency and also investigate the verifiability of text generated by state-of-the-art pre-trained language models. A generated sentence is verifiable if it can be corroborated or disproved by Wikipedia, and we find that the verifiability of generated text strongly depends on the decoding strategy. In particular, we discover a tradeoff between factuality (i.e., the ability to generate Wikipedia-corroborated text) and repetitiveness. While decoding strategies such as top-k and nucleus sampling lead to less repetitive generations, they also produce less verifiable text. Based on these findings, we introduce a simple and effective decoding strategy which, in comparison to previously used decoding strategies, produces less repetitive and more verifiable text.
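The abstract refers to two truncated sampling strategies, top-k and nucleus (top-p) sampling. As a rough illustration only (this is not the paper's code; the function names and toy logits below are our own assumptions), a minimal NumPy sketch of both filters might look like:

import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logits vector.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def sample_top_k(logits, k, rng):
    # Top-k sampling: keep only the k most probable tokens,
    # renormalise over them, then sample.
    probs = softmax(logits)
    kept = np.argsort(probs)[-k:]
    p = probs[kept] / probs[kept].sum()
    return int(rng.choice(kept, p=p))

def sample_nucleus(logits, top_p, rng):
    # Nucleus (top-p) sampling: keep the smallest set of tokens whose
    # cumulative probability reaches top_p, renormalise, then sample.
    probs = softmax(logits)
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, top_p) + 1
    kept = order[:cutoff]
    p = probs[kept] / probs[kept].sum()
    return int(rng.choice(kept, p=p))

rng = np.random.default_rng(0)
logits = rng.normal(size=50)  # stand-in for a language model's next-token logits
print(sample_top_k(logits, k=10, rng=rng))
print(sample_nucleus(logits, top_p=0.9, rng=rng))

Larger k or top_p admits more of the distribution's tail, which, per the abstract's finding, reduces repetition but also yields less verifiable continuations; smaller values behave closer to greedy decoding.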
Year: 2020
Venue: Findings of the Association for Computational Linguistics: EMNLP 2020
Keywords: automatic text generation
Type: 04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this item:
File: Massarelli_How_2020.pdf (access restricted to archive administrators)
Note: https://aclanthology.org/2020.findings-emnlp.22
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 556.77 kB
Format: Adobe PDF

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1481935
Citations:
  • Scopus: 20
  • Web of Science: 8